Generating and Estimating Nonverbal Alphabets for Situated and Multimodal Communications
Authors
Abstract
In this paper, we discuss a formalized approach for generating and estimating symbols (and alphabets) that can be communicated by a wide range of nonverbal means, based on specific user requirements (medium, priorities, and the type of information to be conveyed). We give a short characterization of the basic terms and parameters of such symbols (and alphabets), together with approaches to generating them. We then present a framework, an experimental setup, and several machine learning methods for estimating the usefulness and effectiveness of nonverbal alphabets and systems. Previous results demonstrate that combining multimodal data sources (such as wearable accelerometers, heart monitors, muscle movement sensors, and brain-computer interfaces) with machine learning approaches can provide a deeper understanding of the usefulness and effectiveness of such alphabets and systems for nonverbal and situated communication. Symbols (and alphabets) generated and estimated by these methods may be useful in various applications: from synthetic languages and constructed scripts to multimodal nonverbal and situated interaction between people and artificial intelligence systems through human-computer interfaces such as mouse gestures, touchpads, body gestures, eye-tracking cameras, wearables, and brain-computer interfaces, especially in applications for elderly care and people with disabilities.
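The estimation step described above can be sketched in code: cross-validated decoding accuracy over sensor windows serves as a proxy for how distinguishable the symbols of a candidate alphabet are. This is a minimal illustration on synthetic accelerometer data; the symbol definitions, signal model, features, and classifier choice (scikit-learn's RandomForestClassifier) are assumptions for demonstration, not the paper's actual method.

```python
# Hedged sketch: estimating how reliably a candidate nonverbal "alphabet"
# can be decoded from wearable sensor data. All symbol definitions and
# signal parameters below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Three hypothetical gesture symbols, each modeled as a 3-axis accelerometer
# window with a symbol-specific dominant frequency and amplitude.
SYMBOLS = {0: (1.0, 2.0), 1: (2.5, 1.0), 2: (4.0, 0.5)}  # (freq in Hz, amplitude)
FS, WIN = 50, 100  # 50 Hz sampling, 2-second windows

def synth_window(freq, amp):
    """Generate one noisy 3-axis window for a gesture symbol."""
    t = np.arange(WIN) / FS
    sig = amp * np.sin(2 * np.pi * freq * t)
    return np.stack([sig + rng.normal(0, 0.3, WIN) for _ in range(3)])

def features(win):
    """Simple per-axis summary statistics as a feature vector."""
    return np.concatenate([win.mean(1), win.std(1), np.abs(np.diff(win)).mean(1)])

X, y = [], []
for label, (freq, amp) in SYMBOLS.items():
    for _ in range(60):
        X.append(features(synth_window(freq, amp)))
        y.append(label)
X, y = np.array(X), np.array(y)

# Cross-validated decoding accuracy as a proxy for symbol distinguishability:
# an alphabet whose symbols cannot be decoded from the sensor stream is not
# useful for that medium.
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```

In the same spirit, other modalities (heart rate, muscle movement, BCI signals) could be concatenated into the feature vector to compare alphabets across media.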
Similar Papers
Dancing the night away: controlling a virtual karaoke dancer by multimodal expressive cues
In this article, we propose an approach of nonverbal interaction with virtual agents to control agents’ behavioral expressivity by extracting and combining acoustic and gestural features. The goal for this approach is twofold, (i) expressing individual features like situated arousal and personal style and (ii) transmitting this information in an immersive 3D environment by suitable means.
Technical and Design Challenges in Multimodal, Situated Human-Robot Dialogue
Allison Sauppé and Bilge Mutlu, Department of Computer Sciences, University of Wisconsin–Madison. As robotic products become more ubiquitous in society, fulfilling such roles as teachers and assembly-line workers, they will need to be capable of conversing with their users using spoken and nonverbal language...
Achieving Multimodal Cohesion during Intercultural Conversations
How do English as a lingua franca (ELF) speakers achieve multimodal cohesion on the basis of their specific interests and cultural backgrounds? From a dialogic and collaborative view of communication, this study focuses on how verbal and nonverbal modes cohere together during intercultural conversations. The data include approximately 160-minute transcribed video recordings of ELF interactions ...
Generating Verbal and Nonverbal Utterances for Virtual Characters
We introduce an approach to multimodal generation of verbal and nonverbal contributions for virtual characters in a multiparty dialogue scenario. This approach addresses issues of turn-taking, is able to synchronize the different modalities in real time, and supports fixed utterances as well as utterances that are assembled by a full-fledged tree-based text generation algorithm. The system is im...
The furhat social companion talking head
In this demonstrator we present the Furhat robot head. Furhat is a highly human-like robot head in terms of dynamics, thanks to its use of back-projected facial animation. Furhat also takes advantage of an advanced dialogue toolkit designed to facilitate rich and fluent multimodal, multiparty, situated human-machine spoken dialogue. The demonstrator will present a social dialogue ...
Journal: CoRR
Volume: abs/1712.04314
Pages: -
Published: 2017